
    Towards the Integration of an Intuitionistic First-Order Prover into Coq

    An efficient intuitionistic first-order prover integrated into Coq is useful to replay proofs found by external automated theorem provers. We propose a two-phase approach: An intuitionistic prover generates a certificate based on the matrix characterization of intuitionistic first-order logic; the certificate is then translated into a sequent-style proof. Comment: In Proceedings HaTT 2016, arXiv:1606.0542

    Mechanising Complexity Theory: The Cook-Levin Theorem in Coq


    The Weak Call-By-Value λ-Calculus is Reasonable for Both Time and Space

    We study the weak call-by-value λ-calculus as a model for computational complexity theory and establish the natural measures for time and space -- the number of beta-reductions and the size of the largest term in a computation -- as reasonable measures with respect to the invariance thesis of Slot and van Emde Boas [STOC '84]. More precisely, we show that, using those measures, Turing machines and the weak call-by-value λ-calculus can simulate each other within a polynomial overhead in time and a constant factor overhead in space for all computations that terminate in (encodings of) 'true' or 'false'. We consider this result a solution to the long-standing open problem, explicitly posed by Accattoli [ENTCS '18], of whether the natural measures for time and space of the λ-calculus are reasonable, at least in the case of weak call-by-value evaluation. Our proof relies on a hybrid of two simulation strategies of reductions in the weak call-by-value λ-calculus by Turing machines, both of which are insufficient if taken alone. The first strategy is the most naive one in the sense that a reduction sequence is simulated precisely as given by the reduction rules; in particular, all substitutions are executed immediately. This simulation runs within a constant overhead in space, but the overhead in time might be exponential. The second strategy is heap-based and relies on structure sharing, similar to existing compilers of eager functional languages. This strategy only has a polynomial overhead in time, but the space consumption might require an additional factor of log n, which is essentially due to the size of the pointers required for this strategy. Our main contribution is the construction and verification of a space-aware interleaving of the two strategies, which is shown to yield both a constant overhead in space and a polynomial overhead in time.
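    The two natural measures are easy to state on a concrete syntax. The following is a minimal, illustrative Coq sketch (not the paper's mechanisation; all names are placeholders) of a de Bruijn presentation of the weak call-by-value λ-calculus, with the size function underlying the space measure and the step relation whose step count is the time measure.

        (* Hypothetical sketch: de Bruijn terms of the weak call-by-value
           lambda-calculus; identifiers are illustrative only. *)
        Inductive term : Type :=
        | var (n : nat)
        | lam (s : term)
        | app (s t : term).

        (* The space measure of a computation is the size of the largest
           term occurring in it. *)
        Fixpoint size (s : term) : nat :=
          match s with
          | var _ => 1
          | lam s => 1 + size s
          | app s t => 1 + size s + size t
          end.

        (* Naive substitution of a (closed) abstraction for a variable. *)
        Fixpoint subst (s : term) (k : nat) (u : term) : term :=
          match s with
          | var n => if Nat.eqb n k then u else var n
          | lam s => lam (subst s (S k) u)
          | app s t => app (subst s k u) (subst t k u)
          end.

        (* Weak call-by-value reduction: beta steps fire only on abstractions
           and there is no reduction below binders; the time measure counts
           these steps. *)
        Inductive step : term -> term -> Prop :=
        | step_beta s t : step (app (lam s) (lam t)) (subst s 0 (lam t))
        | step_appL s s' t : step s s' -> step (app s t) (app s' t)
        | step_appR s t t' : step t t' -> step (app s t) (app s t').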

    The weak call-by-value λ-calculus is reasonable for both time and space

    We study the weak call-by-value λ-calculus as a model for computational complexity theory and establish the natural measures for time and space -- the number of beta-reduction steps and the size of the largest term in a computation -- as reasonable measures with respect to the invariance thesis of Slot and van Emde Boas from 1984. More precisely, we show that, using those measures, Turing machines and the weak call-by-value λ-calculus can simulate each other within a polynomial overhead in time and a constant factor overhead in space for all computations terminating in (encodings of) 'true' or 'false'. The simulation yields that standard complexity classes like P, NP, PSPACE, or EXP can be defined solely in terms of the λ-calculus, but does not cover sublinear time or space. Note that our measures still have the well-known size explosion property, where the space measure of a computation can be exponentially bigger than its time measure. However, our result implies that this exponential gap disappears once complexity classes are considered instead of concrete computations. We consider this result a first step towards a solution for the long-standing open problem of whether the natural measures for time and space of the λ-calculus are reasonable. Our proof for the weak call-by-value λ-calculus is the first proof of reasonability (including both time and space) for a functional language based on natural measures and enables the formal verification of complexity-theoretic proofs concerning complexity classes, both on paper and in proof assistants. The proof idea relies on a hybrid of two simulation strategies of reductions in the weak call-by-value λ-calculus by Turing machines, both of which are insufficient if taken alone. The first strategy is the most naive one in the sense that a reduction sequence is simulated precisely as given by the reduction rules; in particular, all substitutions are executed immediately. This simulation runs within a constant overhead in space, but the overhead in time might be exponential. The second strategy is heap-based and relies on structure sharing, similar to existing compilers of eager functional languages. This strategy only has a polynomial overhead in time, but the space consumption might require an additional factor of log n, which is essentially due to the size of the pointers required for this strategy. Our main contribution is the construction and verification of a space-aware interleaving of the two strategies, which is shown to yield both a constant overhead in space and a polynomial overhead in time.
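    Complementing the sketch given after the previous abstract, the following hedged Coq fragment illustrates the kind of data underlying the second, heap-based strategy: code is paired with a pointer into a shared environment heap instead of being substituted eagerly, and variable lookup chases pointers, whose size accounts for the extra log n factor in space. The concrete machine in the paper differs; the names here are illustrative.

        Require Import List.

        Section HeapSketch.
          (* Hypothetical sketch of structure sharing; the type of program
             code is kept abstract here (in the paper it is the type of
             lambda-terms). *)
          Variable code : Type.

          Definition addr := nat.

          Inductive clos : Type :=
          | closC (p : code) (env : addr).    (* code plus environment pointer *)

          Inductive heap_entry : Type :=
          | cellC (c : clos) (next : addr).   (* environment cell and its tail *)

          Definition heap := list heap_entry.

          (* Variable lookup follows 'next' pointers through the heap; these
             pointers are what cause the extra log n factor in space. *)
          Fixpoint lookup (h : heap) (a : addr) (n : nat) {struct n} : option clos :=
            match nth_error h a with
            | Some (cellC c a') =>
                match n with
                | 0 => Some c
                | S n' => lookup h a' n'
                end
            | None => None
            end.
        End HeapSketch.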

    A Mechanised Proof of the Time Invariance Thesis for the Weak Call-By-Value λ-Calculus

    The weak call-by-value λ-calculus L and Turing machines can simulate each other with a polynomial overhead in time. This time invariance thesis for L, where the number of β-reductions of a computation is taken as its time complexity, is the culmination of a 25-year line of research, combining work by Blelloch, Greiner, Dal Lago, Martini, Accattoli, Forster, Kunze, Roth, and Smolka. The present paper presents a mechanised proof of the time invariance thesis for L, constituting the first mechanised equivalence proof between two standard models of computation covering time complexity. The mechanisation builds on an existing framework for the extraction of Coq functions to L and contributes a novel Hoare logic framework for the verification of Turing machines. The mechanised proof of the time invariance thesis establishes L as a model for future developments of mechanised computational complexity theory regarding time. It can also be seen as a non-trivial but elementary case study of time-complexity-preserving translations between a functional language and a sequential machine model. As a by-product, we obtain a mechanised many-one equivalence proof of the halting problems for L and Turing machines, which we contribute to the Coq Library of Undecidability Proofs.
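    As a hedged illustration of the notion used in the last sentence, many-one reducibility between problems over arbitrary types is commonly stated in Coq roughly as below (the actual identifiers in the Coq Library of Undecidability Proofs may differ); the by-product then amounts to one such reduction from the halting problem of L to that of Turing machines and one in the converse direction.

        (* Hedged sketch of many-one reducibility; names are illustrative. *)
        Definition reduces {X Y : Type} (p : X -> Prop) (q : Y -> Prop) : Prop :=
          exists f : X -> Y, forall x, p x <-> q (f x).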

    Distributed Performance Measurement and Usability Assessment of the Tor Anonymization Network

    While the Internet increasingly permeates the everyday life of individuals around the world, it becomes crucial to prevent unauthorized collection and abuse of personalized information. Internet anonymization software such as Tor is an important instrument to protect online privacy. However, due to the performance overhead caused by Tor, many Internet users refrain from using it. This has a negative impact on the overall privacy provided by Tor, since it depends on the size of the user community and the availability of shared resources. Detailed measurements of the performance of Tor are crucial for solving this issue. This paper presents comparative experiments on Tor latency and throughput for surfing to 500 popular websites from several locations around the world over a period of 28 days. Furthermore, we compare these measurements to critical latency thresholds gathered from web usability research, including our own user studies. Our results indicate that without massive future optimizations of Tor performance, it is unlikely that a larger part of Internet users would adopt it for everyday usage. This leads to fewer resources available to the Tor community than theoretically possible, and increases the exposure of privacy-concerned individuals. Furthermore, this could create an adoption barrier for similar privacy-enhancing technologies for a Future Internet.

    Synthetic Kolmogorov Complexity in Coq

    We present a generalised, constructive, and machine-checked approach to Kolmogorov complexity in the constructive type theory underlying the Coq proof assistant. By proving that nonrandom numbers form a simple predicate, we obtain elegant proofs of undecidability for random and nonrandom numbers and a proof of uncomputability of Kolmogorov complexity. We use a general and abstract definition of Kolmogorov complexity and subsequently instantiate it to several definitions frequently found in the literature. Whereas textbook treatments of Kolmogorov complexity usually rely heavily on classical logic and the axiom of choice, we put emphasis on the constructiveness of all our arguments, without, however, blurring their essence. We first give a high-level proof idea using classical logic, which can be formalised with Markov's principle via folklore techniques we subsequently explain. Lastly, we show a strategy for eliminating Markov's principle from a certain class of computability proofs, rendering all our results fully constructive. All our results are machine-checked by the Coq proof assistant, which is enabled by using a synthetic approach to computability: rather than formalising a model of computation, which is well known to introduce considerable overhead, we abstractly assume a universal function, allowing the proofs to focus on the mathematical essence.
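    The synthetic approach mentioned at the end can be pictured by the following simplified Coq sketch: a step-indexed universal function U is assumed abstractly instead of being built from a concrete machine model, and notions such as nonrandomness are then phrased relative to U. The definitions below are illustrative placeholders, not the paper's actual ones.

        Section SyntheticSketch.
          (* Hypothetical: U p k = Some y means "program" p halts within k
             steps with output y; no concrete model of computation is fixed. *)
          Variable U : nat -> nat -> option nat.

          Definition outputs (p y : nat) : Prop := exists k, U p k = Some y.

          (* Simplified placeholder notions: y is nonrandom if some strictly
             smaller program outputs it; random means not nonrandom. *)
          Definition nonrandom (y : nat) : Prop := exists p, p < y /\ outputs p y.
          Definition random (y : nat) : Prop := ~ nonrandom y.
        End SyntheticSketch.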

    Assessing pollen beetle dynamics in diversified agricultural landscapes with reduced pesticide management strategies: Exploring the potential of digital yellow water traps for continuous, high-resolution monitoring in oilseed rape

    The European Farm to Fork strategy strives to reduce pesticide use and risk by 50% by 2030 while preserving agricultural productivity, biodiversity, and human health. Novel research on crop diversification and new field arrangements, supported by digital technologies, offers sustainable innovations for pest control. This study evaluates digital yellow water traps, equipped with a camera and an associated artificial intelligence model, for continuous pollen beetle monitoring in diversified agricultural landscapes. Data were collected in oilseed rape from three harvest years (2021-2023) at the experimental site patchCROP, a landscape experiment established to study the effects of spatial and temporal crop diversification measures on yield, ecosystem services, and biodiversity. In patchCROP, crops were planted in smaller, 0.5 ha (72 × 72 m) squares called "patches" with different pesticide management strategies and were compared to surrounding commercial fields. The digital yellow water traps and AI were evaluated and found to be useful for gauging pollen beetle immigration into the crop. Across all years, higher insect pest pressure was recorded in the patches compared to commercial fields but did not necessarily compromise yields. Implementation of pesticide management strategies, including targeted insecticide applications at specific insect pest thresholds, was not associated with reduced yields in patches with flower strips. Future studies should consider examining the role of field size and alternative diversification approaches to fine-tune insecticide reduction strategies at the landscape scale.

    Lessons Learned: The Multifaceted Field of (Digital) Neighborhood Development

    In a cross-national project, 14 neighborhoods from Germany, Austria, and Switzerland were accompanied on their way to digitally supported neighborhood work. This paper discusses general requirements, the choice of a suitable digital tool, the implementation process, and the challenges faced by the various stakeholders. The following factors were found to play a major role in sustainable neighborhood work: a good fit with the overall development strategy, interplay between online neighborhood work and physical interactions, strong existing neighborhood management structures, strategic planning of digitalization activities, start-up funding for innovation activities, and, above all, the presence of a committed person or team as well as interesting content to attract users. Depending on the neighborhood, self-managed and individualistic solutions are preferred to generic and/or commercial solutions. There is no ‘fit-for-all’ path to sustainable digitally supported neighborhoods.